Search Results: "joey"

9 October 2017

Antonio Terceiro: pristine-tar updates

Introduction

pristine-tar is a tool that is present in the workflow of a lot of Debian people. I adopted it last year after it was orphaned by its creator, Joey Hess. A little after that, Tomasz Buchert joined me, and we are now a functional two-person team. pristine-tar's goal is to import the content of a pristine upstream tarball into a VCS repository, and to be able to later reconstruct that exact same tarball, bit by bit, based on the contents in the VCS, so we don't have to store a full copy of that tarball. This is done by storing a binary delta file which can be used to reconstruct the original tarball from a tarball produced with the contents of the VCS. Ultimately, we want to make sure that the tarball that is uploaded to Debian is exactly the same as the one that was downloaded from upstream, without having to keep a full copy of it around if all of its contents are already extracted in the VCS anyway.

The current state of the art, and perspectives for the future

pristine-tar solves a wicked problem, because our ability to reconstruct the original tarball is affected by changes in the behavior of tar and of all of the compression tools (gzip, bzip2, xz), and by the exact options that were used when creating the original tarballs. Because of this, pristine-tar currently has a few embedded copies of old versions of compressors, to be able to reconstruct tarballs produced by them, and also relies on an ever-evolving patch to tar that has been carried in Debian for a while. So basically, keeping pristine-tar working is a game of Whac-A-Mole. Joey provided a good summary of the situation when he orphaned pristine-tar. Going forward, we may need to rely on other ways of ensuring the integrity of upstream source code. That could take the form of signed git tags, signed uncompressed tarballs (so that the compression doesn't matter), or maybe even a different system for storing the actual tarballs. Debian bug #871806 contains an interesting discussion on this topic.

Recent improvements

Even if keeping pristine-tar useful in the long term will be hard, too much Debian work currently relies on it, so we can't just abandon it. Instead, we keep figuring out ways to improve. And I have good news: pristine-tar has recently received updates that improve the situation quite a bit. In order to understand how much better we are getting at it, I created a visualization of the regression test suite results. With the help of data from there, let's look at the improvements made since pristine-tar 1.38, which was the version included in stretch.

pristine-tar 1.39: xdelta3 by default. This was the first release made after the stretch release, and it made xdelta3 the default delta generator for newly-imported tarballs. Existing tarballs with deltas produced by xdelta are still supported; this only affects new imports. The support for having multiple delta generators was written by Tomasz, and had already been there since 1.35, but we decided to only flip the switch after xdelta3 was supported in a stable release.

pristine-tar 1.40: improved compression heuristics. pristine-tar uses a few heuristics to produce the smallest delta possible, and this includes trying different compression options. In this release, Tomasz included a contribution by Lennart Sorensen to also try the --gnu option, which greatly improved the support for rsyncable gzip-compressed files.
We can see an example of the type of improvement in the regression test suite data for delta sizes for faad2_2.6.1.orig.tar.gz: in 1.40, the delta produced from that test tarball went down from 800KB, almost the size of the tarball itself, to 6.8KB.

pristine-tar 1.41: support for signatures. This release saw the addition of support for storage and retrieval of upstream signatures, contributed by Chris Lamb.

pristine-tar 1.42: optionally recompressing tarballs. I had this idea and wanted to try it out: most of our problems reproducing tarballs come from tarballs produced with old compressors, from changes in compressor behavior, or from uncommon compression options being used. What if we could just recompress the tarballs before importing them? Yes, this kind of breaks the "pristine" bit of the whole business, but on the other hand: 1) the contents of the tarball are not affected, and 2) even if the initial tarball is not bit-by-bit the same as the upstream release, at least future uploads of that same upstream version with Debian revisions can be regenerated just fine. In some cases, as with the test tarball util-linux_2.30.1.orig.tar.xz, recompressing is what makes reproducing the tarball (and thus importing it with pristine-tar) possible at all: util-linux_2.30.1.orig.tar.xz can only be imported after being recompressed. In other cases, if the current heuristics can't produce a reasonably small delta, recompressing makes a huge difference. That's the case for mumble_1.1.8.orig.tar.gz: with recompression, the delta produced from it goes from 1.2MB, or 99% of the size of the original tarball, to 14.6KB, 1% of the size of the original tarball. Recompressing is not enabled by default, and can be enabled by passing the --recompress option. If you are using pristine-tar via a wrapper tool like gbp-buildpackage, you can use the $PRISTINE_TAR environment variable to set options that will affect any pristine-tar invocations. Also, even if you enable recompression, pristine-tar will only try it if delta generation fails completely, or if the delta produced from the original tarball is too large. You can control what "too large" means by using the --recompress-threshold-bytes and --recompress-threshold-percent options. See the pristine-tar(1) manual page for details.
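To make the new options concrete, here is what enabling recompression could look like when committing a new tarball (the package name and the 10% threshold are illustrative):
pristine-tar commit --recompress --recompress-threshold-percent 10 ../foo_1.0.orig.tar.gz
export PRISTINE_TAR="--recompress --recompress-threshold-percent 10"
The second line shows the environment variable route for wrapper tools such as gbp-buildpackage.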

26 September 2017

Enrico Zini: Systemd timer units

These are the notes of a training course on systemd I gave as part of my work with Truelite.

.timer units

.timer units configure the activation of other units (usually a .service unit) at a given time. The functionality is similar to cron, with more features and finer time granularity. For example, in Debian Stretch apt has a timer for running apt update, which runs at a random time to distribute load on the servers:
# /lib/systemd/system/apt-daily.timer
[Unit]
Description=Daily apt download activities
After=network-online.target
Wants=network-online.target
[Timer]
OnCalendar=*-*-* 6,18:00
RandomizedDelaySec=12h
Persistent=true
[Install]
WantedBy=timers.target
The corresponding apt-daily.service file then only runs when the system is on mains power, to avoid unexpected battery drain on systems like laptops:
# /lib/systemd/system/apt-daily.service
[Unit]
Description=Daily apt download activities
Documentation=man:apt(8)
ConditionACPower=true
[Service]
Type=oneshot
ExecStart=/usr/lib/apt/apt.systemd.daily update
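To check when these timers last ran and when they will next be activated, systemctl can list them (the command is standard; output varies per system):
$ systemctl list-timers apt-daily.timer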
Note that if you want to schedule tasks with an accuracy under a minute (for example to play a beep every 5 seconds when running on battery), you need to also configure AccuracySec= for the timer to a delay shorter than the default 1 minute. This is how to make your computer beep when on battery:
# /etc/systemd/system/beep-on-battery.timer
[Unit]
Description=Beeps every 10 seconds
[Install]
WantedBy=timers.target
[Timer]
AccuracySec=1s
OnUnitActiveSec=10s
# /etc/systemd/system/beep-on-battery.service
[Unit]
Description=Beeps when on battery
ConditionACPower=false
[Service]
Type=oneshot
ExecStart=/usr/bin/aplay /tmp/beep.wav
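A minimal sketch of putting the pair into service, assuming a beep.wav has been placed at /tmp/beep.wav as referenced above:
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now beep-on-battery.timer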
See:

31 August 2017

Paul Wise: FLOSS Activities August 2017

Changes

Issues

Review

Administration
  • myrepos: get commit/admin access from joeyh at DebConf17, add commit/admin access for other patch submitters, apply my stack of patches
  • Debian: fix weird log file issues, redirect hardware donor, clean up a weird dir, fix some OOB info, ask for TLS on meetings-archive.d.n, check an I/O error, restart broken stunnels, powercycle 1 borked machine
  • Debian mentors: lintian/security updates & reboot
  • Debian wiki: remove some stray cache files, whitelist 3 email domains, whitelist some email addresses, disable 1 spammer account, disable 1 account with bouncing email
  • Debian QA: apply patch to fix PTS watch file errors, deploy changes
  • Debian derivatives census: run scripts for Purism, remove some noise from logs, trigger a recheck, merge fix by Unit193, deploy changes
  • Openmoko: security updates, reboots, enable unattended-upgrades

Communication
  • Attended DebConf17 and provided some input in BoFs
  • Sent Misc Dev News #44
  • Invite Google gLinux (on IRC) to the Debian derivatives census
  • Welcome Sven Haardiek (of GreenboneOS) to the Debian derivatives census
  • Inquire about the status of Canaima

Sponsors The samba bug report was sponsored by my employer. All other work was done on a volunteer basis.

Chris Lamb: Free software activities in August 2017

Here is my monthly update covering what I have been doing in the free software world in August 2017 (previous month):
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced either maliciously or accidentally during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised. I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area. This month I:
  • Presented a status update at Debconf17 in Montréal, Canada alongside Holger Levsen, Maria Glukhova, Steven Chamberlain, Vagrant Cascadian, Valerie Young and Ximin Luo.
  • I worked on the following issues upstream:
    • glib2.0: Please make the output of gio-querymodules reproducible. (...)
    • gcab: Please make the output reproducible. (...)
    • gtk+2.0: Please make the immodules.cache files reproducible. (...)
    • desktop-file-utils: Please make the output reproducible. (...)
  • Within Debian:
    • Categorised a large number of packages and issues in the Reproducible Builds "notes" repository.
    • Worked on publishing our weekly reports. (#118, #119, #120, #121 & #122)

I also made the following changes to our tooling:
diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • Use name attribute over path to avoid leaking comparison full path in output. (commit)
  • Add missing skip_unless_module_exists import. (commit)
  • Tidy diffoscope.progress and the XML comparator (commit, commit)

disorderfs

disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues.

  • Add a simple autopkgtest smoke test. (commit)


Debian
Patches contributed
  • openssh: Quote the IP address in ssh-keygen -f suggestions. (#872643)
  • libgfshare:
    • SIGSEGV if /dev/urandom is not accessible. (#873047)
    • Add bindnow hardening. (#872740)
    • Support nodoc build profile. (#872739)
  • devscripts:
  • memcached: Add hardening to systemd .service file. (#871610)
  • googler: Tidy long and short package descriptions. (#872461)
  • gnome-split: Homepage points to domain-parked website. (#873037)

Uploads
  • python-django 1:1.11.4-1 New upstream release.
  • redis:
    • 4:4.0.1-3 Drop yet more non-deterministic tests.
    • 4:4.0.1-4 Tighten systemd/seccomp hardening.
    • 4:4.0.1-5 Drop even more tests with timing issues.
    • 4:4.0.1-6 Don't install completions to /usr/share/bash-completion/completions/debian/bash_completion/.
    • 4:4.0.1-7 Don't let sentinel integration tests fail the build as they use too many timers to be meaningful. (#872075)
  • python-gflags 1.5.1-3 If SOURCE_DATE_EPOCH is set, either use that as a source of current dates or the UTC version of the file's modification time (#836004); don't call update-alternatives --remove in postrm; update debian/watch/Homepage & refresh/tidy the packaging.
  • bfs 1.1.1-1 New upstream release, tidy autopkgtest & patches, organising the latter with Pq-Topic.
  • python-daiquiri 1.2.2-1 New upstream release, tidy autopkgtests & update travis.yml from travis.debian.net.
  • aptfs 2:0.10-2 Add upstream signing key, refer to /usr/share/common-licenses/GPL-3 in debian/copyright & tidy autopkgtests.
  • adminer 4.3.1-2 Add a simple autopkgtest & don't install the Selenium-based tests in the binary package.
  • zoneminder (1.30.4+dfsg-2) Prevent build failures with GCC 7 (#853717) & correct example /etc/fstab entries in README.Debian (#858673).

Finally, I reviewed and sponsored uploads of astral, inflection, more-itertools, trollius-redis & wolfssl.

Debian LTS

This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:
  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 1049-1 for libsndfile preventing a remote denial of service attack.
  • Issued DLA 1052-1 against subversion to correct an arbitrary code execution vulnerability.
  • Issued DLA 1054-1 for the libgxps XML Paper Specification library to prevent a remote denial of service attack.
  • Issued DLA 1056-1 for cvs to prevent a command injection vulnerability.
  • Issued DLA 1059-1 for the strongswan VPN software to close a denial of service attack.

Debian bugs filed
  • wget: Please hash the hostname in ~/.wget-hsts files. (#870813)
  • debian-policy: Clarify whether mailing lists in Maintainers/Uploaders may be moderated. (#871534)
  • git-buildpackage: "pq export" discards text within square brackets. (#872354)
  • qa.debian.org: Escape HTML in debcheck before outputting. (#872646)
  • pristine-tar: Enable multithreaded compression in pristine-xz. (#873229)
  • tryton-meta: Please combine tryton-modules-* into a single source package with multiple binaries. (#873042)
  • azure-cli:
  • fwupd-tests: Don't ship test files to generic /usr/share/installed-tests dir. (#872458)
  • libvorbis: Maintainer field points to a moderated mailing list. (#871258)
  • rmlint-gui: Ship a rmlint-gui binary. (#872162)
  • template-glib: debian/copyright references online source without quotation. (#873619)

FTP Team

As a Debian FTP assistant I ACCEPTed 147 packages: abiword, adacgi, adasockets, ahven, animal-sniffer, astral, astroidmail, at-at-clojure, audacious, backdoor-factory, bdfproxy, binutils, blag-fortune, bluez-qt, cheshire-clojure, core-match-clojure, core-memoize-clojure, cypari2, data-priority-map-clojure, debian-edu, debian-multimedia, deepin-gettext-tools, dehydrated-hook-ddns-tsig, diceware, dtksettings, emacs-ivy, farbfeld, gcc-7-cross-ports, git-lfs, glewlwyd, gnome-recipes, gnome-shell-extension-tilix-dropdown, gnupg2, golang-github-aliyun-aliyun-oss-go-sdk, golang-github-approvals-go-approval-tests, golang-github-cheekybits-is, golang-github-chzyer-readline, golang-github-denverdino-aliyungo, golang-github-glendc-gopher-json, golang-github-gophercloud-gophercloud, golang-github-hashicorp-go-rootcerts, golang-github-matryer-try, golang-github-opentracing-contrib-go-stdlib, golang-github-opentracing-opentracing-go, golang-github-tdewolff-buffer, golang-github-tdewolff-minify, golang-github-tdewolff-parse, golang-github-tdewolff-strconv, golang-github-tdewolff-test, golang-gopkg-go-playground-validator.v8, gprbuild, gsl, gtts, hunspell-dz, hyperlink, importmagic, inflection, insighttoolkit4, isa-support, jaraco.itertools, java-classpath-clojure, java-jmx-clojure, jellyfish1, lazymap-clojure, libblockdev, libbytesize, libconfig-zomg-perl, libdazzle, libglvnd, libjs-emojify, libjwt, libmysofa, libundead, linux, lua-mode, math-combinatorics-clojure, math-numeric-tower-clojure, mediagoblin, medley-clojure, more-itertools, mozjs52, openssh-ssh1, org-mode, oysttyer, pcscada, pgsphere, poppler, puppetdb, py3status, pycryptodome, pysha3, python-cliapp, python-coloredlogs, python-consul, python-deprecation, python-django-celery-results, python-dropbox, python-fswrap, python-hbmqtt, python-intbitset, python-meshio, python-parameterized, python-pgpy, python-py-zipkin, python-pymeasure, python-thriftpy, python-tinyrpc, python-udatetime, python-wither, python-xapp, pythonqt, r-cran-bit, r-cran-bit64, r-cran-blob, r-cran-lmertest, r-cran-quantmod, r-cran-ttr, racket-mode, restorecond, rss-bridge, ruby-declarative, ruby-declarative-option, ruby-errbase, ruby-google-api-client, ruby-rash-alt, ruby-representable, ruby-test-xml, ruby-uber, sambamba, semodule-utils, shimdandy, sjacket-clojure, soapysdr, stencil-clojure, swath, template-glib, tools-analyzer-jvm-clojure, tools-namespace-clojure, uim, util-linux, vim-airline, vim-airline-themes, volume-key, wget2, xchat, xfce4-eyes-plugin & xorg-gtest. I additionally filed 6 RC bugs against packages that had incomplete debian/copyright files against: gnome-recipes, golang-1.9, libdazzle, poppler, python-py-zipkin & template-glib.

18 August 2017

Mike Gabriel: @DebConf17: Ad-hoc BoF: Bits from the Debian+Ubuntu MATE Packaging Team

On Tuesday, late afternoon, at DebConf17, I offered an ad-hoc BoF about the current status of the MATE Desktop packaging efforts in Debian and Ubuntu. I need to get this written down before DebConf17 feels too far away... Unfortunately, I scheduled that BoF opposite Joey Hess's talk about his post-Debian life, which attracted many people. So, only a small group of people came together to share and discuss the current status of MATE in Debian and Ubuntu.

Ongoing efforts around MATE in Debian and Ubuntu

A quick summary of ongoing efforts was provided, and a collection of URLs for reporting bugs, looking up packaging status, etc. was listed.

Cross-Distro Packaging Workflow

The workflow of Debian and Ubuntu packaging in the MATE Packaging Team was described in detail (basically, all packages go through Debian, the only exception being freeze states of this or that distro), and the benefit of the close cooperation between the two projects was underlined. We reduce the packaging effort tremendously by working very closely together. In the past, we (Martin Wimpress and myself) also invited the people from Linux Mint to join our cross-distro packaging efforts, but so far to no avail. The packaging for the Parrot Security OS has also recently been integrated into our packaging Git repository.

Requested / Upcoming Improvements

From people attending the BoF, I got some main inputs, which I hope to accomplish with the help of all, for Debian buster. Another project that I will also work on for Debian buster is proper Ayatana Indicator support for Debian's MATE Desktop.

Credits

Thanks to all the people attending the BoF for your input. Feel invited to contribute to our team's effort in improving the user experience of the MATE Desktop Environment in Debian (and Ubuntu, and ...). Also a big thanks to the people already on the team for bringing MATE in Debian to where it is now. Good work, folks!

9 August 2017

Joey Hess: unifying OS installation and configuration management

Three years ago, I realized that propellor (my configuration management system that is configured using haskell) could be used as an installer for Debian (or other versions of Linux). In propellor is d-i 2.0, I guessed it would take "a month and adding a few thousand lines of code". I've now taken that month, and written that code, and I presented the result at DebConf yesterday. I demoed propellor building a live Debian installation image, and then handed it off to a volunteer from the audience to play with its visual user interface and perform the installation. The whole demo took around 20 minutes, and ended with a standard Debian desktop installation. (Video) The core idea is to reuse the same configuration management system for several different purposes.
  1. Building a bootable disk image that can be used as both a live system and as an OS installer.
  2. Running on that live system, to install the target system. Which can just involve copying the live system to the target disk and then letting the configuration management system make the necessary changes to get from the live system configuration to the target system configuration.
  3. To support such things as headless arm boards, building customized images tuned for the target board and use case, that can then simply be copied to the board to install.
  4. Optionally, running on the installed system later, to further customize it. Starting from the same configuration that produced the installed system in the first place.
There can be enormous code reuse here, and improvements made for one of those will often benefit all the rest as well. Once everything is handled by configuration management, all user interface requirements become just a matter of editing the configuration. That's the gist of it: configuration management reused for installation and image building, and multiple editor interfaces to make it widely usable. I was glad, sitting in on a BoF session before my talk, that several people in Debian are already thinking along similar lines. And if Debian wanted to take this work and run with it, I'd be glad to assist as propellor's maintainer. But the idea is more important than the code, and I hope my elaboration of it helps point a way, if not the way. While what I've built installs Debian, little of it is Debian-specific. It would probably be easy to port it to Arch Linux, which propellor already supports. There are Linux-specific parts, so porting to FreeBSD would be harder, but propellor knows, at the type level, which OSes each property supports, which will ease porting. GuixSD and NixOS already use configuration management for installation, and were part of my inspiration. I've extended what they do in some ways (in other ways they remain far ahead).
The code is here. And here are some links to more details about what I built, and ideas encountered along the way:

3 August 2017

Joey Hess: home power monitoring

For years I've recorded solar panel data by hand. Filled two notebooks with columns of figures. My new charge controller, an EPsolar Tracer-BN, finally let me automate it.
morning activity: by 8 am the sun is still behind the hill, but 16 watts are being produced, and by 11:30 am the battery bank is full
You can explore my home power data here: http://homepower.joeyh.name/
(click and drag to zoom) The web interface loads the RRD files into a web browser using javascriptRRD. I wrote a haskell program that drives the epsolar-tracer python library to poll for data, and stores it in RRD files. Could have used collectd or something, but the interface to the charge controller is currently a bit flakey and I have to be careful about retries and polling frequencies. Also I wanted full control over how much data is stored in the RRD files. Full source code
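As an illustration of this kind of storage, RRD files can be created and fed from the command line with rrdtool; the file name, step, and data source definition here are illustrative assumptions, not the actual configuration:
$ rrdtool create power.rrd --step 15 DS:watts:GAUGE:60:0:1000 RRA:AVERAGE:0.5:1:5760
$ rrdtool update power.rrd N:16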

15 July 2017

Joey Hess: Functional Reactive Propellor

I wrote this code, and it made me super happy!
data Variety = Installer | Target
    deriving (Eq)
seed :: UserInput -> Versioned Variety Host
seed userinput ver = host "foo"
    & ver (   (== Installer) --> hostname "installer"
          <|> (== Target)    --> hostname (inputHostname userinput)
          )
    & osDebian Unstable X86_64
    & Apt.stdSourcesList
    & Apt.installed ["linux-image-amd64"]
    & Grub.installed PC
    & XFCE.installed
    & ver (   (== Installer) --> desktopUser defaultUser
          <|> (== Target)    --> desktopUser (inputUsername userinput)
          )
    & ver (   (== Installer) --> autostartInstaller )
This is doing so much in so little space and with so little fuss! It's completely defining two different versions of a Host. One version is the Installer, which in turn installs the Target. The code above provides all the information that propellor needs to convert a copy of the Installer into the Target, which it can do very efficiently. For example, it knows that the default user account should be deleted, and a new user account created based on the user's input of their name. The germ of this idea comes from a short presentation I made about propellor in Portland several years ago. I was describing RevertableProperty, and Joachim Breitner pointed out that to use it, the user essentially has to keep track of the evolution of their Host in their head. It would be better for propellor to know what past versions looked like, so it can know when a RevertableProperty needs to be reverted. I didn't see a way to address the objection for years. I was hung up on the problem that propellor's properties can't be compared for equality, because functions can't be compared for equality (generally). And on the problem that it would be hard for propellor to pull old versions of a Host out of git. But then I ran into the situation where I needed these two closely related hosts to be defined in a single file, and it all fell into place. The basic idea is that propellor first reverts all the revertible properties for other versions. Then it ensures the property for the current version. Another use for it would be if you wanted to be able to roll back changes to a Host. For example:
foos :: Versioned Int Host
foos ver = host "foo"
    & hostname "foo.example.com"
    & ver (   (== 1) --> Apache.modEnabled "mpm_worker"
          <|> (>= 2) --> Apache.modEnabled "mpm_event"
          )
    & ver ( (>= 3)   --> Apt.unattendedUpgrades )
foo :: Host
foo = foos `version` (4 :: Int)
Versioned properties can also be defined:
foobar :: Versioned Int (RevertableProperty DebianLike DebianLike)
foobar ver =
    ver (   (== 1) --> (Apt.installed "foo" <!> Apt.removed "foo")
        <|> (== 2) --> (Apt.installed "bar" <!> Apt.removed "bar")
        )
Notice that I've embedded a small DSL for versioning into the propellor config file syntax. While implementing versioning took all day, that part was super easy; Haskell config files win again! API documentation for this feature. PS: Not really FRP, probably. But time-varying in an FRP-like way.
Development of this was sponsored by Jake Vosloo on Patreon.

11 July 2017

Joey Hess: bonus project

Little bonus project after the solar upgrade was replacing the battery box's rotted roof, down to the cinderblock walls. Except for a piece of plywood, used all scrap lumber for this project, and also scavenged a great set of hinges from a discarded cabinet. I hope the paint on all sides and an inch of shingle overhang will be enough to protect the plywood. Bonus bonus project to use up paint. (Argh, now I want to increase the size of the overflowing grape arbor. Once you start on this kind of stuff..) After finishing all that, it was time to think about this while enjoying this. (Followed by taking delivery of a dumptruck full of gravel -- 23 tons -- which it turns out was enough for only half of my driveway..)

27 June 2017

Joey Hess: 12 to 24 volt house conversion

Upgrading my solar panels involved switching the house from 12 volts to 24 volts. No reasonably priced charge controllers can handle 1 KW of PV at 12 volts. There might not be a lot of people who need to do this; entirely 12 volt offgrid houses are not super common, and most upgrades these days probably involve rooftop microinverters and so would involve a switch from DC to AC. I did not find a lot of references online for converting a whole house's voltage from 12V to 24V. To prepare, I first checked that all the fuses and breakers were rated for > 24 volts. (Actually, > 30 volts because it will be 26 volts or so when charging.) Also, I checked for any shady wiring, and verified that all the wires I could see in the attic and wiring closet were reasonably sized (10AWG) and in good shape. Then I:
  1. Turned off every light, unplugged every plug, pulled every fuse and flipped every breaker.
  2. Rewired the battery bank from 12V to 24V.
  3. Connected the battery bank to the new charge controller.
  4. Engaged the main breaker, and waited for anything strange.
  5. Screwed in one fuse at a time.
lighting

The house used all fluorescent lights, and they have ballasts rated for only 12V. While they work at 24V, they might blow out sooner or overheat. In fact one died this evening, and while it was flickering before, I suspect the 24V did it in. It makes sense to replace them with more efficient LED lights anyway. I found some 12-24V DC LED lights for regular screw-in (edison) light fixtures. They do not seem very common; Amazon only had a few models, and they shipped from China. Also, I ordered a 15 foot long, 300 LED strip light, which runs on 24V DC and has an adhesive backing. Great stuff -- it can be cut to different lengths and stuck anywhere. I installed some underneath the cook stove hood and the kitchen cabinets, which didn't have lights before. Similar LED strips are used in some desktop lamps. My lamp was 12V only (barely lit at 24V), but I was able to replace its LED strip, upgrading it to 24V and three times as bright. (Christmas lights are another option; many LED christmas lights run on 24V.)

appliances

My Lenovo laptop's power supply that I use in the house is a vehicle DC-DC converter, and is rated for 12-24V. It seems to be running fine at 26V, and did not get warm even when charging the laptop up from empty. I'm using buck converters to run various USB powered (5V) ARM boxes such as my sheevaplug. They're quarter sized, so fit anywhere, and are very efficient. My satellite internet receiver is running with a large buck converter, feeding 12V to an inverter, feeding to a 30V DC power supply. That triple conversion is inefficient, but it works for now. The ceiling fan runs on 24V, and does not seem to run much faster than on 12V. It may be rated for 12-24V; I can't seem to find any info about it. The radio is a 12V car radio. I used a LM317 to run it on 24V, to avoid the RF interference a buck converter would have produced. This is a very inefficient conversion; half of the power is wasted as heat. But since I can stream internet radio all day now via satellite, I'll not use the FM radio very often. Fridge... still running on propane for now, but I have an idea for a way to build a cold storage battery that will use excess power from the PV array, and keep a fridge at a constant 34 degrees F. Next home improvement project in the queue.

26 June 2017

Joey Hess: DIY solar upgrade complete-ish

Success! I received the Tracer4215BN charge controller, which UPS accidentally-on-purpose delivered to a neighbor, and got it connected up and the battery bank rewired to 24V in a couple hours. charge controller reading 66.1V at 3.4 amps on panels, charging battery at 29.0V at 7.6A Here it's charging the batteries at 220 watts, and that picture was taken at 5 pm, when the light hits the panels at nearly a 90 degree angle. Compare with the old panels, where the maximum I ever recorded at high noon was 90 watts. I've made more power since 4:30 pm than I used to be able to make in a day! \o/

23 June 2017

Joey Hess: PV array is hot

Only took a couple hours to wire up and mount the combiner box. PV combiner box with breakers Something about larger wiring like this is enjoyable. So much less fiddly than what I'm used to. PV combiner box wiring And the new PV array is hot! multimeter reading 66.8 V DC Update: The panels have an open circuit voltage of 35.89 and are in strings of 2, so I'd expect to see 71.78 V with only my multimeter connected. So I'm losing 0.07 volts to wiring, which is less than I designed for.

21 June 2017

Joey Hess: DIY professional grade solar panel installation

I've installed 1 kilowatt of solar panels on my roof, using professional grade equipment. The four panels are Astronergy 260 watt panels, and they're mounted on IronRidge XR100 rails. Did it all myself, without help. house with 4 solar panels on roof I had three goals for this install:
  1. Cheap but sturdy. Total cost will be under $2500. It would probably cost at least twice as much to get a professional install, and the pros might not even want to do such a small install.
  2. Learn the roof mount system. I want to be able to add more panels, remove panels when working on the roof, and understand everything.
  3. Make every day a sunny day. With my current solar panels, I get around 10x as much power on a sunny day as a cloudy day, and I have plenty of power on sunny days. So 10x the PV capacity should be a good amount of power all the time.
My main concerns were, would I be able to find the rafters when installing the rails, and would the 5x3 foot panels be too unwieldy to get up on the roof by myself. I was able to find the rafters, without needing a stud finder, after I removed the roof's vent caps, which exposed the rafters. The shingles were on straight enough that I could follow the lines down and drilled into the rafter on the first try every time. And I got the rails on spaced well and straight, although I could have spaced the FlashFeet out better (oops). My drill ran out of juice half-way, and I had to hack it to recharge on solar power, but that's another story. Between the learning curve, a lot of careful measurement, not the greatest shoes for roofing, and waiting for recharging, it took two days to get the 8 FlashFeet installed and the rails mounted. Taking a break from that and swimming in the river, I realized I should have been wearing my water shoes on the roof all along. Super soft and nubbly, they make me feel like a gecko up there! After recovering from an (unrelated) achilles tendon strain, I got the panels installed today. Turns out they're not hard to handle on the roof by myself. Getting them up a ladder to the roof by yourself would normally be another story, but my house has a 2 foot step up from the back retaining wall to the roof, and even has a handy grip beam as you step up. roof next to the ground with a couple of cinderblock steps The last gotcha, which I luckily anticipated, is that panels will slide down off the rails before you can get them bolted down. This is where a second pair of hands would have been most useful. But, I MacGyvered a solution, attaching temporary clamps before bringing a panel up, that stopped it sliding down while I was attaching it. clamp temporarily attached to side of panel I also finished the outside wiring today, including the one hack of this install so far. Since the local hardware store didn't have a suitable conduit to bring the cables off the roof, I cobbled one together from pipe, with foam inserts to prevent chafing. some pipe with 5 wires running through it, attached to the side of the roof While I have 1 kilowatt of power on my roof now, I won't be able to use it until next week. After ordering the upgrade, I realized that my old PWM charge controller would be able to handle less than half the power, and to get even that I would have needed to mount the fuse box near the top of the roof and run down a large and expensive low-voltage high-amperage cable, around 00 AWG size. Instead, I'll be upgrading to a MPPT controller, and running a single 150 volt cable to it. Then, since the MPPT controller can only handle 1 kilowatt when it's converting to 24 volts, not 12 volts, I'm gonna have to convert the entire house over from 12V DC to 24V DC, including changing all the light fixtures and rewiring the battery bank...

14 June 2017

Joey Hess: not tabletop solar

Borrowed a pickup truck today to fetch my new solar panels. This is 1 kilowatt of power on my picnic table. solar panels on picnic table

5 May 2017

Joey Hess: announcing debug-me

Today I'm excited to release debug-me, a program for secure remote debugging.
Debugging a problem over email/irc/BTS is slow, tedious, and hard. The developer needs to see your problem to understand it. debug-me aims to make debugging fast, fun, and easy, by letting the developer access your computer remotely, so they can immediately see and interact with the problem. Making your problem their problem gets it fixed fast. A debug-me session is logged and signed with the developer's GnuPG key, producing a chain of evidence of what they saw and what they did. So the developer's good reputation is leveraged to make debug-me secure.
I've recorded a short screencast demoing debug-me. And here's a screencast about debug-me's chain of evidence. The idea for debug-me came from Re: Debugging over email, and then my Patreon supporters picked debug-me in a poll as a project I should work on. It's been a fun month, designing the evidence chain, building a custom efficient protocol with space saving hacks, using websockets and protocol buffers and ed25519 for the first time, and messing around with low-level tty details. The details of debug-me's development are in my devblog. Anyway, I hope debug-me makes debugging less of a tedious back and forth, at least some of the time. PS: Since debug-me's protocol lets multiple people view the same shell session, and optionally interact with it, there could be uses for it beyond debugging, including live screencasting, pair programming, etc. PPS: There needs to be a debug-me server not run by me, so someone please run one..

27 April 2017

Joey Hess: Exede Surfbeam 2

My new satellite internet connection is from Exede, connecting to the ViaSat 1 bird in geosync orbit. A few technical details that I've observed follow.

antagonistic by design

The "Surfbeam 2 wifi modem" is a closed, proprietary system. That is important because it's part of Exede's bandwidth management system. The Surfbeam tracks data use and sends it periodically to Exede. When a user has gone over their monthly priority data, Exede then throttles the bandwidth in various ways -- this throttling seems to be implemented, at least partially, on the Surfbeam itself. (Perhaps by setting QoS flags?) So, if a user could hack their Surfbeam, they could probably bypass the bandwidth caps, or at least some of them. Perhaps Exede would notice eventually. Of course, doing so would surely violate the Exede TOS. If you're renting the modem, like I am, hacking a device you don't own might also subject you to criminal penalties. Needless to say, I don't plan to hack the Surfbeam. But it's been hacked before. So, this is a device that lives in people's homes and is antagonistic to them by design.

weird data throttling

The way the Surfbeam reports data use back to Exede periodically and gets throttling configured has some odd effects sometimes. For example, the Surfbeam can be in a throttled state left over from the previous billing month. When a new billing month begins, it can remain throttled for some time (up to multiple hours) until it sends an update to Exede and they un-throttle it. Data downloaded at that time might still be counted as priority data even though it was throttled. I've seen some good indications of that happening, but am not sure yet. But, I've decided that the throttling doesn't matter for me. Why? ViaSat 1 has many spot beams, and the working-class beam I'm in (most of it is in eastern Kentucky) does not seem to get a lot of use between 7 am and 4:30 pm weekdays. Even when throttled, I often get 300 kb/s - 1 mb/s speeds during the day, which is not a lot worse than the ~2.5 mb/s peak when unthrottled. And that's the time when I want to use broadband -- when the sun is shining and I'm at home at work/play. I'm probably going to switch to a cheaper plan with less priority data, because the priority data is not buying me much. This is a big change from the old FAP which rendered the satellite no faster than dialup.

a whole network in there

Looking at the ports open on the Surfbeam, some very strange things turned up. First, there are not one, not two, but three separate IPs used by the device, and there are at least two and perhaps three distinct computers involved. There are a lot of flickering LEDs inside the box; a whole network in there. 192.168.100.1 is the satellite controller. It's a Linux box, fingerprinted as kernel 3.10 or so (so presumably full of security holes), and it's running thttpd/2.25b (which doesn't seem to have any known holes). It seems to have ssh and snmp, but with some port filtering that prevents access. (Note that the above exploit video confirms that snmp is running.) Some machine-parsable data is exposed at http://192.168.100.1/index.cgi?page=modemStatusData and http://192.168.100.1/index.cgi?page=triaStatusData. (See the SurfStat program.) 192.168.1.1 is the wifi router. It has a dns server, an icslap proxy, and nmap thinks it's Linux 3.x with Synology DiskStation Manager (probably the latter is a false positive?). It has its own separate web server for configuration, which is not thttpd. I'm fairly sure this is a separate processor from the other IP address. 192.168.100.2 responds to ICMP, but has no open ports at all. However, it seems to have filtered ssh, telnet, msrpc, microsoft-ds, and port 8090 (probably http), so perhaps it's running all that stuff. This one is definitely a separate processor, located in the satellite dish's TRIA (transmit receive integrated assembly). Verified by disconnecting the dish's coax cable and being unable to ping it.

lack of source code for GPLed software

Exede did not provide anything to me about the GPL-licensed source code on the Surfbeam 2. I'm not sure if they're legally required to do so in my case, since they're renting it to me? But, they do let you buy the device too, so it's interesting that nothing mentions the licenses of its software and where to get the source code. I'll try to buy it and ask for the source code eventually.

power use

The Surfbeam pulls as much power as two beefy laptops, but it is beaming signals thousands of miles into space, so I give it a bit of a pass. I want to find a way to power it from DC power, but have not had any success with that so far, so am wasting more power to run it on an inverter. Its power supply is rated at 30v and 2.5 amps. Interestingly, I've seen a photo of another Surfbeam power supply that only pulls 1.5 amps. This, and a different case with fewer (external) LEDs, makes me think perhaps the installer stuck me with an old and less efficient version. Maybe they integrated some of those processors into a single board in a new version and perhaps made it waste less power as heat. Mine gets pretty warm. I'm currently only able to run it approximately one hour per hour of sun collected by the house. Still on dialup the rest of the time. With the ability to get on broadband when dialup is being painful, being on dialup the rest of the time is perfectly ok. Of course it helps that with tools like git-annex, I'm very well tuned to popping up onto broadband briefly and making it count.
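For reference, the kind of probing described in "a whole network in there" can be reproduced with stock nmap; the IPs are the ones listed above:
$ sudo nmap -O 192.168.100.1    # OS fingerprint of the satellite controller
$ sudo nmap -p- 192.168.100.2   # full TCP port scan of the TRIA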

17 April 2017

Sylvain Beucler: Practical basics of reproducible builds 3

On my quest to generate reproducible standalone binaries for GNU FreeDink, I met new friends but currently lie defeated by an unexpected enemy... Episode 1: Episode 2:
First, the random build differences when using -Wl,--no-insert-timestamp were explained.
peanalysis shows random build dates:
$ reprotest 'i686-w64-mingw32.static-gcc hello.c -I /opt/mxe/usr/i686-w64-mingw32.static/include -I/opt/mxe/usr/i686-w64-mingw32.static/include/SDL2 -L/opt/mxe/usr/i686-w64-mingw32.static/lib -lmingw32 -Dmain=SDL_main -lSDL2main -lSDL2 -lSDL2main -Wl,--no-insert-timestamp -luser32 -lgdi32 -lwinmm -limm32 -lole32 -loleaut32 -lshell32 -lversion -o hello && chmod 700 hello && analysePE.py hello | tee /tmp/hello.log-$(date +%s); sleep 1' 'hello'
$ diff -au /tmp/hello.log-1*
--- /tmp/hello.log-1490950327   2017-03-31 10:52:07.788616930 +0200
+++ /tmp/hello.log-1523203509   2017-03-31 10:52:09.064633539 +0200
@@ -18,7 +18,7 @@
 found PE header (size: 20)
     machine: i386
     number of sections: 17
-    timedatestamp: -1198218512 (Tue Jan 12 05:31:28 1932)
+    timedatestamp: 632430928 (Tue Jan 16 09:15:28 1990)
     pointer to symbol table: 4593152 (0x461600)
     number of symbols: 11581 (0x2d3d)
     size of optional header: 224
@@ -47,7 +47,7 @@
     Win32VersionValue: 0
     size of image (memory): 4640768
     size of headers (offset to first section raw data): 1536
-    checksum (for drivers): 4927867
+    checksum (for drivers): 4922616
     subsystem: 3
         win32 console binary
     DllCharacteristics: 0
Stephen Kitt mentioned 2 simple patches (1 2) fixing uninitialized memory in binutils. These patches fix the variation and were submitted to MXE (pull request).
Next was playing with compiler support for SOURCE_DATE_EPOCH (which e.g. sets __DATE__ macros).
The FreeDink DFArc frontend historically displays a build date in the About box:
    "Build Date: %s\n", ..., __TDATE__
sadly support is only landing upstream in GCC 7 :/
I had to remove that date.
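For reference, once the toolchain honors it (GCC 7, as noted above), the convention is to pin the epoch before building, typically to the date of the last commit; an illustrative sketch:
$ export SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)
$ make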
Now comes the challenging part. All my tests with reprotest checked out. I started writing a reproducible build environment based on Docker (git browse).
At first I could not run reprotest in the container, so I reworked it with SSH support, and reprotest validated determinism.
(I also generate a reproducible .zip archive, more on that later.) So far so good, but was the release identical when running reprotest successively on the different environments?
(reminder: this is a .exe build that is insensitive to varying path, hence consistent in a full reprotest)
$ sha256sum *.zip
189d0ca5240374896c6ecc6dfcca00905ae60797ab48abce2162fa36568e7cf1  freedink-109.0-bin-buildsh.zip
e182406b4f4d7c3a4d239eee126134ba5c0304bbaa4af3de15fd4f8bda5634a9  freedink-109.0-bin-docker.zip
e182406b4f4d7c3a4d239eee126134ba5c0304bbaa4af3de15fd4f8bda5634a9  freedink-109.0-bin-reprotest-docker.zip
37007f6ee043d9479d8c48ea0a861ae1d79fb234cd05920a25bb3db704828ece  freedink-109.0-bin-reprotest-null.zip
Ouch! Even though both the Docker container and my host are running Stretch, there are differences.
For the two host builds (direct and reprotest), there is a subtle but simple difference: HOME.
HOME is invariably non-existent in reprotest, while my normal compilation environment has an existing home (duh!). This caused a subtle bug when cross-compiling with mingw and wine-binfmt:
  checking whether we are cross compiling... no
  checking whether we are cross compiling... yes
The respective binaries were very different, notably due to a different config.h.
This can be fixed by specifying --build in addition to --host when calling ./configure. I suggested reprotest have one of the tests with a valid HOME (#860428).
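Concretely, the fix amounts to something like this (the --build triplet is illustrative; it is normally auto-detected by config.guess):
$ ./configure --host=i686-w64-mingw32.static --build=x86_64-pc-linux-gnu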
Now comes the big one: after the fix, I still got:
$ sha256sum *.zip
3545270ef6eaa997640cb62d66dc610a328ce0e7d412f24c8f18fdc7445907fd  freedink-109.0-bin-buildsh.zip
cc50ec1a38598d143650bdff66904438f0f5c1d3e2bea0219b749be2dcd2c3eb  freedink-109.0-bin-docker.zip
3545270ef6eaa997640cb62d66dc610a328ce0e7d412f24c8f18fdc7445907fd  freedink-109.0-bin-reprotest-chroot.zip
cc50ec1a38598d143650bdff66904438f0f5c1d3e2bea0219b749be2dcd2c3eb  freedink-109.0-bin-reprotest-docker.zip
3545270ef6eaa997640cb62d66dc610a328ce0e7d412f24c8f18fdc7445907fd  freedink-109.0-bin-reprotest-null.zip
There is consistency on my host, and consistency within docker, but both are different.
Moreover, all the .o files were identical, so something must have gone wrong when compiling the libs, that is, in MXE. After many checks, it appears that libstdc++.a is different.
Just overwriting it gets me a consistent FreeDink release on all environments.
Still, when rebuilding it (make gcc), libstdc++.a always has the same environment-dependent checksum.
45f8c5d50a68aa9919ee3602a4e3f5b2bd0333bc8d781d7852b2b6121c8ba27b  /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a  # host
6870b84f8e17aec4b5cf23cfe9c2e87e40d9cf59772a92707152361b6ebc1eb4  /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a  # docker
The two libraries are very different; there's barely any blue in hexcompare. At that time, I realized that the "official" Docker Debian images are not really official, as Joey explains.
Could it be that Docker maliciously tampered with the compiler, as Ken Thompson warned in 1984?? Well, before jumping to conclusions, let's mix & match.
$ sudo env -i /usr/sbin/chroot chroot-docker/
$ exec bash -l
$ cd /opt/mxe
$ touch src/gcc.mk
$ sha256sum /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a 
6870b84f8e17aec4b5cf23cfe9c2e87e40d9cf59772a92707152361b6ebc1eb4  /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a
$ make gcc
[build]     gcc                    i686-w64-mingw32.static
[done]      gcc                    i686-w64-mingw32.static                                 2709464 KiB    7m2.039s
$ sha256sum /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a 
45f8c5d50a68aa9919ee3602a4e3f5b2bd0333bc8d781d7852b2b6121c8ba27b  /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a
# consistent with host builds
$ sudo tar -C chroot -c . | docker import - chroot-debootstrap
$ docker run -ti chroot-debootstrap /bin/bash
$ sha256sum /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a
45f8c5d50a68aa9919ee3602a4e3f5b2bd0333bc8d781d7852b2b6121c8ba27b  /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a
$ touch src/gcc.mk
$ make gcc
[build]     gcc                    i686-w64-mingw32.static
[done]      gcc                    i686-w64-mingw32.static                                 2709412 KiB    7m6.608s
$ sha256sum /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a
6870b84f8e17aec4b5cf23cfe9c2e87e40d9cf59772a92707152361b6ebc1eb4  /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a
# consistent with docker builds
So, AFAICS, when building with the same sources and tools, depending on whether we run in a container or not, we get a consistent but different libstdc++.a. This kind of issue is not detected with a simple reprotest build, as it only tests variations within a fixed build environment.
This is quite worrisome: I intend to use a container to control my build environment, but I can't guarantee that the container technology will be exactly the same 5 years from now. All my setup is simple and available for inspection at https://git.savannah.gnu.org/cgit/freedink.git/tree/autobuild/freedink-w32-snapshot/. I'd very much welcome enlightenment :)

11 April 2017

Joey Hess: starting debug-me and a new devblog

I've started building debug-me. It's my birthday, and building a new program is kind of my birthday gift to myself, because I love starting a new program and seeing where it goes. (Also, my Patreon backers wanted me to get on with building debug-me.) I also have a new devblog! Up until now, I've had a devblog that only covered work on git-annex. That one continues, but the new devblog is for development journaling for any project I'm working on. http://joeyh.name/devblog/

1 April 2017

Antoine Beaupré: My free software activities, February and March 2017

Looking into self-financing

Before I begin, I should mention that I started tracking my time working on free software more systematically. I spend a lot of time on the computer, as regular readers of this blog might remember, so I wanted to know exactly how much time was paid vs free work. I was already using org-mode's time clock system to keep track of my work hours, so I just extended this to my regular free software contributions, which also helps in writing those reports. It turns out that over 60% of my computer time is spent working on free software. That's huge! I was expecting something more along the range of 20 to 40% of my time. So I started thinking about ways of financing this work. I created a Patreon page, but I'm hesitant to launch such a campaign: the only thing worse than "no patreon page" is "a patreon page with failed goals and no one financing it". So before starting such an effort, I'd like to get a feeling of what other people's experiences with it are. I know that joeyh is close to achieving his goals, but I can't compare with the guy that invented git-annex or debhelper, so I'm concerned I wouldn't be able to raise the same level of funding. So any advice you have, feel free to contact me in private or in the comments. If you would be ready to fund my work, I'd love to know about it, obviously, but I guess I wouldn't get real numbers until I actually open up such a page... Now, onto the regular report.

Wallabako

I spent a good chunk of time completing most of the things I had in mind for Wallabako, which I mentioned quickly in the previous report. Wallabako is now much easier to install, with clearer instructions, an easier to use configuration file, more reliable synchronization and read status propagation. As usual, the Wallabako README file has all the details. I've also looked at better integration with Koreader, the free software e-reader that forms the basis of the okreader free software distribution, which has been able to port Debian to the Kobo e-readers, a project I am really excited about. This project has the potential of supporting Kobo readers beyond the lifetime that upstream grants it, and removes a lot of proprietary software and spyware that ships with the Kobo readers. So I have made a few contributions to okreader and also to koreader, the ebook reader okreader is based on.

Stressant

I rewrote stressant, my simple burn-in and stress-testing tool. After struggling in turn with Debirf, live-build, vmdebootstrap and even FAI, I figured maybe it wasn't the best idea to try and reinvent that particular wheel: instead of reinventing how to build yet another Debian system build tool, maybe I should just reuse what's already there. It turns out there's a well-known, successful and fairly complete recovery system called Grml. It is a Debian derivative, so all I needed to do was to stop procrastinating and actually write the actual stressant tool instead of just creating a distribution with a bunch of random tools shipped in. This allowed me to focus on which tools were the best to stress test different components. This selection ended up including fio, which can also be used to overwrite disk drives with the proper options (--overwrite and --size=100%), although grml also ships with nwipe for wiping old spinning disks and hdparm to do a secure erase of SSD disks (whatever that's worth). Stressant still needs to be shipped with grml for this transition to be complete. In the meantime, I was able to configure the excellent public Gitlab CI service to provide ISO images with Stressant built-in as a stopgap measure. I also need to figure out a way to start stressant from a boot menu, to automate deployments on a larger scale, although because I have little need for the feature at this moment in time, this will likely wait for a sponsor to show up. Still, stressant has useful features, like the capability of sending logs by email using a fresh new implementation of the Python SMTPHandler (BufferedSMTPHandler) which waits for logging to complete before sending a single email. Another interesting piece of code in there is the NegateAction argparse handler that enables the use of "toggle flags" (e.g. --flag / --no-flag). I'm so happy with the code that I figure I could just share it here directly:
import argparse

class NegateAction(argparse.Action):
    '''add a toggle flag to argparse

    this is similar to 'store_true' or 'store_false', but allows
    arguments prefixed with --no to disable the default. the default
    is set depending on the first argument - if it starts with the
    negative form (defined by default as '--no'), the default is False,
    otherwise True.
    '''
    negative = '--no'
    def __init__(self, option_strings, *args, **kwargs):
        '''set default depending on the first argument'''
        default = not option_strings[0].startswith(self.negative)
        super(NegateAction, self).__init__(option_strings, *args,
                                           default=default, nargs=0, **kwargs)
    def __call__(self, parser, ns, values, option):
        '''set the truth value depending on whether
        it starts with the negative form'''
        setattr(ns, self.dest, not option.startswith(self.negative))
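For instance, a hypothetical toggle flag can be wired up like this (the option names are illustrative):
parser = argparse.ArgumentParser()
# the first option string starts with '--no', so the default is False
parser.add_argument('--no-color', '--color', dest='color', action=NegateAction,
                    help='enable or disable color output')
print(parser.parse_args(['--color']).color)     # True
print(parser.parse_args(['--no-color']).color)  # False
print(parser.parse_args([]).color)              # False (the default)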
Short and sweet. I wonder why stuff like this is not in the standard library yet - maybe just because no one bothered? It'd be great to get feedback from more experienced Pythonistas on this one. I hope that my work on Stressant is complete. I get zero funding for this work, and have little use for it myself: I manage only a few machines, and such a tool really shines when you regularly put new hardware online, which is (fortunately?) not my case anymore. I'd be happy, of course, to accompany organisations and people that wish to further develop and use such a tool. A short demo of stressant as well as a detailed description of how it works is of course available in its README file.

Standard third party repositories

After looking at improvements for the Grml repository instructions, I realized there was no real "best practices" document on how to configure an Apt repository. Sure, there are tools like reprepro and others, but those hardly qualify as policy: they are very flexible, and there are lots of ways to create insecure repositories or "curl | sh"-style instructions, which we of course generally want to avoid. While the larger problem of untrusted Debian packages remains generally unsolved (e.g. when you install any .deb file, it can get root on your system), it seemed to me that one critical part of this problem is how to add a random third-party repository to your machine while limiting, as much as possible, what possible attackers could do with such a repository. In other words, to solve the more general problem of insecure .deb files, we also need to solve the distribution problem; otherwise fixing the .deb files themselves will be useless. This led to the creation of standardized repository instructions that define:
  1. how to distribute the repository's public signing key (i.e. over HTTPS)
  2. how to name suites and components (e.g. use stable and main unless you have a good reason, and explain yourself)
  3. a recommendation for a healthy dose of apt preferences pinning
  4. how to distribute keys (e.g. with a deriv-archive-keyring package)
I've seen so many third party repositories get this wrong. For example, a lot of repositories recommend this type of command to initialize the OpenPGP trust path:
curl http://example.com/key.asc | apt-key add -
This has the following problems:
  • the key is transferred in plaintext and can easily be manipulated by an active attacker (e.g. a router on your path to the server, or a neighbor in a wifi café)
  • the key is added to the main trust root, which allows it to authenticate as the real Debian archive, therefore giving it all rights over all packages
  • since it's part of the global archive keyring, it's difficult for a package to remove/add the key when a key rollover is necessary (and repositories generally don't provide a deriv-archive-keyring package to handle that process anyway)
An example of this is the Docker install instructions, which at least manage to do this over HTTPS. Some other repositories don't even bother teaching people the proper way of adding those keys. We settled on:
wget -O /usr/share/keyrings/deriv-archive-keyring.gpg https://deriv.example.net/debian/deriv-archive-keyring.gpg
That location was explicitly chosen to be out of the main trust directory, so that it needs to be explicitly added to the sources.list as well:
deb [signed-by=/usr/share/keyrings/deriv-archive-keyring.gpg] https://deriv.example.net/debian/ stable main
Similarly, we highly recommend that users set up "apt pinning" to restrict what a given repository can do. Since pinning is so confusing, most people don't bother configuring it at all, and I have yet to see a single repository advise its users to configure those preferences, which are essential to limit what a repository can do. To keep configuration simple, we recommend this:
Package: *
Pin: origin deriv.example.net
Pin-Priority: 100
Obviously, for a single-package repository, the actual package name should be listed, e.g.:
Package: foo
Pin: origin deriv.example.net
Pin-Priority: 100
And the priority should probably be set to 1 unless you want to allow automatic upgrades: a priority of 100 lets packages already installed from the repository be upgraded to newer versions it publishes, while a priority below 100 only allows installing packages that are not yet installed. It is my hope that this design will gain more traction in the years to come and become a de facto standard that will be a key part of safely adding third party repositories. There is obviously much more work to be done to improve security when installing untrusted .deb files, and I encourage Debian developers to consider contributing to the UntrustedDebs discussions and particularly to the Teams/Dpkg/Spec/DeclarativePackaging work.

Signal R&D

I spent a significant amount of time this month struggling with the Signal project on my phone. I'm still ambivalent about Signal: it's a centralized design, too dependent on phone numbers, but I must admit they get a lot of things right, and it's the only free-software platform that allows for easy-to-use, multi-platform videoconferencing that my family can use.

I've been following Signal for a while. Up until now, I had been using the LibreSignal rebuild of the official client, as distributed on an F-Droid repository. Because I try to avoid Google (proprietary) software on my phone, it was basically the only way I could even install Signal. Unfortunately, that repository is out of date and introduces another point of trust in the distribution model: now you not only need to trust the Signal authors to do the right thing, you also need to trust that F-Droid repo not to inject nasty code on your phone. I therefore started a discussion about how Signal could be distributed outside of the Google Play Store. I'd like to think it's one of the things that led the Signal people to distribute an official copy of Signal outside of the Play Store. After much struggling, I was able to upgrade to this official client and will be able to upgrade easily by just downloading the APK. (Do note that I ended up reinstalling and re-registering Signal, which unfortunately changed my secret keys.)

I do hope Signal enters F-Droid one day, but it could take a while, because it still doesn't work without Google services and barely works with MicroG, the free-software alternative to the Google services clients. Moxie also set a list of requirements, like crash reporting and statistics, that need to be implemented on F-Droid's side before he agrees to the deployment, so this could take a while.

I also participated in the, ahem, discussion on the JWZ blog regarding a supposed vulnerability in Signal where it would leak previously unknown phone numbers to third parties. I reviewed the way phone numbers are uploaded and, while it's possible to create a rainbow table of phone numbers (which are hashed with a truncated SHA-1 checksum), I couldn't verify the claims of other participants in the thread. For me, Signal still does the right thing with contacts, although I do question the way "read status" notifications get transmitted, but that belongs in another bug report / blog post.
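To make the rainbow table concern concrete, here is a toy demonstration of why hashing phone numbers offers limited protection: the keyspace is small enough to enumerate exhaustively. The truncation length and number format below are assumptions for illustration, not Signal's actual wire protocol:

import hashlib

def truncated_sha1(number, length=10):
    '''hash a phone number, keeping a truncated hex prefix
    (the truncation length here is a made-up value)'''
    return hashlib.sha1(number.encode()).hexdigest()[:length]

# ten-digit numbers span only about 10**10 values, so a full
# reverse table is feasible; we precompute a tiny slice of one
# hypothetical number block to show the reversal
prefix = '+1514555'
table = {truncated_sha1(prefix + '%04d' % i): prefix + '%04d' % i
         for i in range(10000)}

print(table[truncated_sha1('+15145550042')])  # prints +15145550042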

Debian Long Term Support (LTS)

It's been more than a year since I started working on Debian LTS, the project started by Raphael Hertzog at Freexian. I didn't work much in February, so I had a lot of hours to catch up on, and was unfortunately unable to do so, partly because I was busy with other projects, and partly because my colleagues are doing a great job at resolving the most important issues. So one of my concerns this month was finding work. It seemed that all the hard packages were either taken (e.g. my usual favorites, tiff and imagemagick, were done by others) or just too challenging (e.g. I don't feel quite comfortable tackling the LTS branch of the Linux kernel yet).

I spent quite a bit of time trying to figure out what was wrong with pcre3, only to realize the "32" in the report was not about the architecture, but about the character width. Because of this, I marked 4 CVEs (CVE-2017-7186, CVE-2017-7244, CVE-2017-7245, CVE-2017-7246) as "not-affected", since the 32-bit character support wasn't enabled in wheezy (or jessie, for that matter). I still spent some time trying to reproduce the issues, which requires a compiler with an AddressSanitizer, something that was introduced in both Clang and GCC after wheezy was released, which makes reproducing this fairly complicated... This allowed me to experiment more with Vagrant, however, and I have provided the Debian cloud team with a 32-bit Vagrant box that was merged in shortly after, although it doesn't show up yet in the official list of Debian images.

Then I looked at the apparmor situation (CVE-2017-6507, Debian bug #858768). That was one tricky bug as well: it's not a security issue in apparmor per se, but more an issue with things that assume a certain behavior from apparmor. I concluded that wheezy was not affected because there are no assumptions of proper isolation there - those are provided only starting from LXC 1.0 - and Docker is not in wheezy. I also couldn't reproduce the issue on jessie; as it turns out, the issue was sysvinit-specific, which is why I couldn't reproduce it under the default systemd configuration shipped with jessie.

I also looked at the various binutils security issues: as I reported on the mailing list, I didn't see anything serious enough in there to warrant a security release, and followed the lead of both the stable and Red Hat security teams by marking this "no-dsa". I similarly reviewed the mp3splt security issues (specifically CVE-2017-5666) and was fairly puzzled by that one, which seems to be triggered only by the same address sanitization extensions as PCRE, although there was some pretty wild interplay with debugging flags in there. All in all, it seems we can't reproduce that issue in wheezy, but I do not feel confident enough in the results to push that issue aside for now.

I finally uploaded the pending graphicsmagick issue (DLA-547-2), a regression update to fix a crash that was introduced in the previous release (DLA-547-1, mistakenly named DLA-574-1). Hopefully that release should clear up some of the confusion and fix the regression. I also released DLA-879-1 for CVE-2017-6369 in firebird2.5, which was an interesting experiment: I couldn't reproduce the issue in a local VM. After following the Ubuntu setup tutorial, as I wasn't too familiar with the Firebird database until now (hint: the default username and password is sysdba/masterkey), I ended up assuming we were vulnerable and just backported the patch after seeing the jessie folks push out a release just in case.
I also looked at updating the ca-certificates package to deal with the pending WoSign/StartCom removal: I made an explicit list of the CAs that need to be removed after reviewing the Mozilla list. I also sent a patch for an unrelated issue where ca-certificates is writing to /usr/local (!!) in Debian bug #843722.

I have also done some "meta" work, starting a discussion about fixing the missing DLA links in the tracker, as you will notice all of the above links lead to nowhere. Thanks to pabs, there are now some links, but unfortunately about 500 DLAs are still missing from the website. We also discussed ways to address Debian bug #859123, something which is currently a manual process. This is now in the hands of the excellent webmaster team. I have also filed a few missing security bugs (Debian bug #859135, Debian bug #859136), partly because I wanted to help the security team. But it turned out that I felt the script needed some improvements, so I submitted a patch to make it easier to run.

Other projects

As usual, there's a mixed bag of other chaos: more stuff on GitHub...

16 March 2017

Joey Hess: end of an era

I'm at home downloading hundreds of megabytes of stuff. This is the first time I've been in a position of "at home" + "reasonably fast internet" since I moved here in 2012. It's weird!

[Image: satellite internet dish with solar panels in the foreground]

While I was renting here, I didn't mind dialup much. In a way it helps to focus the mind and build interesting stuff. But since I bought the house, the prospect of dialup being the only option at home became more painful. While I hope to eventually get on the fiber line that's only a few miles away, I have not convinced that ISP to build out to me yet. Not enough neighbors. So, satellite internet for now.

[Image: 9.1 dB SNR; speedtest results: 15 megabit down / 4.5 up, with significant variation]

The dish seems well aligned. Speed varies a lot, but is easily hundreds of times faster than dialup; latency is about 2x dialup. The equipment uses more power than my laptop, so with the current solar panels, I anticipate using it only 6-9 months of the year. So I may be back to dialup most days come winter, until I get around to adding more PV capacity.

It seems very cool that my house can capture sunlight and use it to beam signals 20 thousand miles into space. Who knows, perhaps there will even be running water one day.

[Image: satellite dish]
